Analyzing the Distributions of the Stochastic Firm Growth Approach
Recently there has been renewed interest in the study of firm size distributions and firm growth rate distributions. The stochastic firm growth approach builds on the assumption that firm growth rates are independent and identically distributed and that size is determined by a first-order autoregressive process, leaving the size distribution log-normal. This paper analyzes these distributional patterns in an empirical context, questioning the foundation of the stochastic growth approach. In a cross-section analysis of four industries using Danish data, it is shown that both the foundation for and the outcome of the stochastic firm growth process, as it has so far been conceived, are empirically far-fetched. In particular, significant deviations from normality are found with respect to the third and fourth moments.
Keywords: Firm Growth Rate and Size Distributions, Evolution of Industries
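The stochastic growth process the abstract questions can be sketched as a toy simulation (an illustrative numpy sketch, not the paper's empirical analysis; the number of firms, periods, and shock scale are arbitrary assumptions). Under i.i.d. growth shocks, log size is a sum of independent terms and so is normal, which is exactly why the paper tests the third and fourth moments:

```python
import numpy as np

rng = np.random.default_rng(0)

n_firms, n_periods = 10_000, 200
log_size = np.zeros(n_firms)          # log initial size normalised to 0

# Gibrat-style process: i.i.d. growth shocks accumulated on log size,
# i.e. a first-order autoregression with unit coefficient.
for _ in range(n_periods):
    log_size += rng.normal(loc=0.0, scale=0.05, size=n_firms)

# Under these assumptions log size is normal, so sample skewness (third
# moment) and excess kurtosis (fourth moment) should be near zero -- the
# moments in which the paper reports significant empirical deviations.
z = (log_size - log_size.mean()) / log_size.std()
skewness = np.mean(z**3)
excess_kurtosis = np.mean(z**4) - 3.0
```

A rejection of normality in these two moments on real data is what undermines the log-normal size distribution the process implies.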
Green protein from locally grown crops (OK-Net EcoFeed Practice Abstract)
• Choose an appropriate type of green crop, such as clover-grass or alfalfa, with an expected high protein and amino acid content. Consider soil types and weather patterns to grow a crop with a good, high-quality yield.
• Harvest the field at regular intervals in order to achieve good plant growth and to obtain batches with more high-quality protein and less fibre.
• Harvesting procedures that minimise the soil content of the green material obtained from the field are necessary to obtain good-quality green protein and to avoid wear on machinery and technical equipment.
• Cooperation with a bio-refinery plant is a prerequisite in order to concentrate the protein into a green paste that can be dried and used in poultry feed.
• If not dried, the wet green paste can be stored in closed containers/plastic bags in cool conditions for a short period.
• Chemical analysis of the green protein concentrate is important in order to replace other protein sources such as soya and to carry out the correct feed formulation. This can be done together with advisors or feed companies.
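The crude-protein substitution arithmetic behind the last point can be sketched as follows (all figures are hypothetical placeholders; real values must come from the chemical analysis and from advisors or feed companies):

```python
# Back-of-the-envelope crude-protein substitution, illustrating why the
# chemical analysis matters for feed formulation. Both fractions below
# are assumed example values, not measured ones.
soya_cp = 0.46    # crude protein fraction of soya cake (assumed)
green_cp = 0.40   # crude protein fraction of dried green concentrate (assumed)

def replacement_kg(soya_kg: float) -> float:
    """kg of green concentrate supplying the same crude protein as soya_kg of soya."""
    return soya_kg * soya_cp / green_cp
```

With these example fractions, replacing 100 kg of soya cake requires 115 kg of the concentrate; amino acid profiles would also need checking before a real substitution.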
Renormalized interactions with a realistic single particle basis
Neutron-rich isotopes in the sdpf space with Z < 15 require modifications to
derived effective interactions to agree with experimental data away from
stability. A quantitative justification is given for these modifications due to
the weakly bound nature of model space orbits via a procedure using realistic
radial wavefunctions and realistic NN interactions. The long tail of the radial
wavefunction for loosely bound single particle orbits causes a reduction in the
size of matrix elements involving those orbits, most notably for pairing matrix
elements, resulting in a more condensed level spacing in shell model
calculations. Example calculations are shown for 36Si and 38Si.Comment: 6 page
Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks
In this paper we propose the utterance-level Permutation Invariant Training
(uPIT) technique. uPIT is a practically applicable, end-to-end, deep learning
based solution for speaker independent multi-talker speech separation.
Specifically, uPIT extends the recently proposed Permutation Invariant Training
(PIT) technique with an utterance-level cost function, hence eliminating the
need for solving an additional permutation problem during inference, which is
otherwise required by frame-level PIT. We achieve this using Recurrent Neural
Networks (RNNs) that, during training, minimize the utterance-level separation
error, hence forcing separated frames belonging to the same speaker to be
aligned to the same output stream. In practice, this allows RNNs, trained with
uPIT, to separate multi-talker mixed speech without any prior knowledge of
signal duration, number of speakers, speaker identity or gender. We evaluated
uPIT on the WSJ0 and Danish two- and three-talker mixed-speech separation tasks
and found that uPIT outperforms techniques based on Non-negative Matrix
Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and
compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network
(DANet). Furthermore, we found that models trained with uPIT generalize well to
unseen speakers and languages. Finally, we found that a single model, trained
with uPIT, can handle both two-speaker and three-speaker speech mixtures.
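The utterance-level cost the abstract describes can be sketched in a few lines (a hypothetical numpy illustration of the loss alone, not the paper's RNN training code; the function name, array shapes, and plain MSE are assumptions). The key point is that one speaker permutation is chosen for the whole utterance:

```python
import itertools
import numpy as np

def upit_loss(est: np.ndarray, ref: np.ndarray):
    """Utterance-level PIT loss sketch.

    est, ref: arrays of shape (S, T, F) -- S output streams / reference
    speakers, T frames, F features. A single permutation is selected for
    the entire utterance, which forces separated frames of the same
    speaker onto the same output stream.
    """
    S = est.shape[0]
    best_loss, best_perm = np.inf, None
    for perm in itertools.permutations(range(S)):
        # total separation error over the whole utterance for this assignment
        loss = float(np.mean((est[list(perm)] - ref) ** 2))
        if loss < best_loss:
            best_loss, best_perm = loss, perm
    return best_loss, best_perm
```

The search covers S! permutations, which is cheap for the two- and three-talker cases evaluated in the paper.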
Permutation Invariant Training of Deep Models for Speaker-Independent Multi-talker Speech Separation
We propose a novel deep learning model, which supports permutation invariant
training (PIT), for speaker independent multi-talker speech separation,
commonly known as the cocktail-party problem. Unlike most prior art, which treats speech separation as a multi-class regression problem, and unlike the deep clustering technique, which treats it as a segmentation (or clustering) problem, our model optimizes for the separation regression error, ignoring the
order of mixing sources. This strategy cleverly solves the long-lasting label
permutation problem that has prevented progress on deep learning based
techniques for speech separation. Experiments on the equal-energy mixing setup
of a Danish corpus confirm the effectiveness of PIT. We believe improvements
built upon PIT can eventually solve the cocktail-party problem and enable
real-world adoption of, e.g., automatic meeting transcription and multi-party
human-computer interaction, where overlapping speech is common.
Comment: 5 pages
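The per-frame label assignment that gives rise to the inference-time permutation problem mentioned in the uPIT abstract above can be sketched as follows (an illustrative numpy sketch; in the paper the assignment unit is a segment of frames rather than a single frame, and the function name and shapes are assumptions):

```python
import itertools
import numpy as np

def frame_pit_loss(est: np.ndarray, ref: np.ndarray) -> float:
    """Frame-level PIT loss sketch. est, ref: shape (S, T, F).

    The speaker assignment is re-chosen for every frame independently,
    so at inference the assignments must still be traced across frames
    to reconstruct each speaker's stream.
    """
    S, T, _ = est.shape
    total = 0.0
    for t in range(T):
        # best speaker assignment for this frame alone
        total += min(
            float(np.mean((est[list(p), t] - ref[:, t]) ** 2))
            for p in itertools.permutations(range(S))
        )
    return total / T
```

Because the minimum is taken per frame, the training loss never penalizes assignment flips between frames; fixing one permutation per utterance is what the utterance-level extension changes.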